    Alternating Minimization Algorithms for Dual-Energy X-Ray CT Imaging and Information Optimization

    This dissertation contributes toward solutions to two distinct problems linked through the use of common information optimization methods. The first is the X-ray computed tomography (CT) imaging problem; the second is the computation of Berger-Tung bounds for the lossy distributed source coding problem. The first problem, discussed through most of the dissertation, is motivated by applications in radiation oncology, including dose prediction in proton therapy and brachytherapy. In proton therapy dose prediction, the stopping power calculation is based on estimates of the electron density and mean excitation energy, which are in turn estimated from the linear attenuation coefficients or the component images produced by dual-energy CT image reconstruction. The quantitative accuracy of these estimates therefore affects the accuracy of proton therapy dose prediction. In brachytherapy, low-energy photons (approximately 20 keV) are often used for internal treatment. These photons are attenuated through their interactions with tissue, and the dose distribution in the tissue decays exponentially with the linear attenuation coefficient as the rate parameter. The accuracy of the estimated linear attenuation coefficients at low energies therefore strongly influences dose prediction in brachytherapy. Numerical studies of the regularized dual-energy alternating minimization (DE-AM) algorithm, an extension of the AM algorithm proposed by O'Sullivan and Benac, were performed with different regularization parameters to find ranges that achieve the desired image quality in terms of estimation accuracy and image smoothness. Reconstructions of both simulated and real data, together with system bias and variance experiments, demonstrated that the DE-AM algorithm cannot reconstruct a high-density material accurately within a limited number of iterations (1000 iterations with 33 ordered subsets). This slow convergence was then studied on a toy, scaled-down problem, which revealed a highly ridged objective function. Motivated by these findings, a new algorithm, the linear integral alternating minimization (LIAM) algorithm, was developed: it first estimates the line integrals of the component images, from which the component images are recovered by an expectation-maximization (EM) algorithm or by linear regression. Both simulated and real data were reconstructed with LIAM while varying the regularization parameters to identify good choices (δ = 500 and λ = 50 for the I0 = 100,000 scenario), with DE-AM results on the same data used for comparison. Using only 1/10 of the computation time of DE-AM, LIAM achieves at least a two-fold improvement in the relative absolute error of the component images in the presence of Poisson noise. This work also explored the reconstruction of image differences from tomographic Poisson data. An alternating minimization algorithm was developed for this problem, achieving a monotonic decrease in the objective function at every iteration.
    Simulations with random images and tomographic data demonstrated that the algorithm recovers the difference images with 100% accuracy in both the number and the identity of the pixels that differ. An extension to 4D CT with simulated tomographic data was also presented, and an approach to 4D PET was described. Different approaches to X-ray adaptive sensing were also proposed, and reconstructions of simulated data were computed to test them; early simulation results show improved reconstruction performance, in terms of normalized L2-norm error, compared to a non-adaptive sensing method. For the second problem, an optimization and computational approach was described for characterizing the inner and outer bounds on the achievable rate regions for distributed source coding, known as the Berger-Tung inner and outer bounds. Several two-variable examples demonstrate the computational capability of the algorithm. For each problem considered with a sum of distortions on the encoded variables, the inner and outer bound regions coincide. For a problem defined by Wagner and Anantharam with a single joint distortion on the two variables, a gap between the bounds was observed in our results. These boundary regions can motivate hypothesized optimal distributions, which can then be tested against the first-order necessary conditions for optimality.
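    The exponential dose decay the abstract invokes for brachytherapy is the Beer-Lambert attenuation law; for a monoenergetic beam of incident intensity I0 traversing a depth x of tissue with linear attenuation coefficient μ it reads:

```latex
% Beer-Lambert attenuation: transmitted intensity after depth x of
% tissue with linear attenuation coefficient \mu (monoenergetic case).
I(x) = I_0 \, e^{-\mu x}
```

    The difference-image algorithm above is reported to decrease its objective monotonically at every iteration; that property is generic to exact alternating minimization. The sketch below is a minimal illustration of that descent guarantee on a toy rank-one least-squares problem, not the DE-AM or LIAM algorithm itself; all names and sizes are made up.

```python
import numpy as np

# Minimal illustration of alternating minimization: each step exactly
# minimizes the objective over one block of variables with the other
# held fixed, so the objective can never increase. This toy rank-one
# least-squares problem is NOT DE-AM or LIAM; it only demonstrates the
# monotone-descent property the abstract reports.
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 40))        # observed data matrix
u = rng.normal(size=50)              # first block of variables
v = rng.normal(size=40)              # second block of variables

def objective(u, v):
    return np.sum((Y - np.outer(u, v)) ** 2)

prev = objective(u, v)
for it in range(20):
    u = Y @ v / (v @ v)              # exact minimizer over u (v fixed)
    v = Y.T @ u / (u @ u)            # exact minimizer over v (u fixed)
    cur = objective(u, v)
    assert cur <= prev + 1e-9        # monotone decrease each iteration
    prev = cur
print(f"final objective: {prev:.4f}")
```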

    Employment of Reduplicated Words in E-C Translation of Children’s Literature: A Case Study of Ren Rongrong’s Translation of The Wind in the Willows

    This thesis explores the use of reduplicated words in Chinese translations of children's literature, taking Ren Rongrong's translation of The Wind in the Willows as a case study. First, the reduplicated words in the text are collected and analyzed. Then, sentences from Ren Rongrong's translation are selected and analyzed in terms of the three effects of reduplicated words: the rhythmical effect, the imaging effect, and the emotional effect. The study concludes that reduplicated words are widely used in the English-Chinese translation of children's literature: verbs, adjectives, adverbs, nouns, quantifiers, numerals, and onomatopoeia can all be rendered as reduplicated words where appropriate. On the one hand, this adds charm to the language and makes the translation more vivid; on the other hand, it makes the text easier for children to accept.

    Spatiotemporal Patterns Induced by Turing-Hopf Interaction and Symmetry on a Disk

    Turing bifurcations and Hopf bifurcations are two important kinds of transitions that give rise to solutions inhomogeneous in space or in time, respectively. On a disk, these two bifurcations may interact in equivariant Turing-Hopf bifurcations. In this paper, normal forms for three kinds of Turing-Hopf bifurcations are derived, and breathing, standing wave-like, and rotating wave-like patterns are found in numerical examples.
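    The paper's three equivariant normal forms are not reproduced in the abstract. As a hedged orientation only, a generic (symmetry-free) cubic truncation of a Turing-Hopf interaction couples a real Turing amplitude a with a complex Hopf amplitude z:

```latex
% Generic cubic Turing-Hopf amplitude equations (orientation only; the
% normal forms in the paper additionally respect the disk's symmetry).
\begin{aligned}
\dot{a} &= \mu_1\, a + a\,\bigl(c_{11}\, a^2 + c_{12}\, \lvert z\rvert^2\bigr),\\
\dot{z} &= (\mu_2 + i\omega)\, z + z\,\bigl(c_{21}\, a^2 + c_{22}\, \lvert z\rvert^2\bigr),
\end{aligned}
```

    Here μ1 and μ2 are unfolding parameters and ω is the Hopf frequency; mixed-mode solutions of systems of this kind underlie breathing and wave-like patterns.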

    Research on the Relationship between Online Reviews and Customer Purchase Intention: The Moderating Role of Personality Trait

    As an important factor affecting customer purchase intention, online reviews have attracted attention from both enterprises and researchers. Drawing on persuasion theory, the theory of planned behavior, and regulatory focus theory, and combining the three dimensions of online reviews, we construct a model of the influence of online reviews on customer purchase intention and put forward the corresponding theoretical hypotheses. Based on data from 252 samples, this paper studies the relationship between online reviews and customer purchase intention and further reveals the moderating effect of personality traits.
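    A moderating effect of the kind the abstract tests is usually assessed with an interaction term in a regression. The sketch below is a hypothetical illustration on synthetic data, not the paper's model; the variable names and coefficients are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch of a moderation analysis: the effect of review quality
# on purchase intention, moderated by a personality trait. Synthetic
# data stands in for the study's 252-respondent survey.
rng = np.random.default_rng(1)
n = 252
df = pd.DataFrame({
    "review_quality": rng.normal(size=n),
    "trait": rng.normal(size=n),
})
df["intention"] = (0.5 * df.review_quality
                   + 0.2 * df.trait
                   + 0.3 * df.review_quality * df.trait  # moderation term
                   + rng.normal(scale=0.5, size=n))

# 'a * b' in the formula expands to a + b + a:b (the interaction);
# a significant a:b coefficient indicates a moderating effect.
model = smf.ols("intention ~ review_quality * trait", data=df).fit()
print(model.summary().tables[1])
```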

    The Future of ChatGPT-enabled Labor Market: A Preliminary Study

    As a phenomenal large language model, ChatGPT has achieved unparalleled success in various real-world tasks and plays an increasingly important role in our daily lives and work. However, extensive concerns have also been raised about potential ethical issues, especially about whether ChatGPT-like artificial general intelligence (AGI) will replace human jobs. To this end, in this paper, we introduce a preliminary data-driven study on the future of the ChatGPT-enabled labor market from the view of Human-AI Symbiosis instead of Human-AI Confrontation. Specifically, we first conduct an in-depth analysis of large-scale job posting data from BOSS Zhipin, the largest online recruitment platform in China. The results indicate that about 28% of occupations in the current labor market require ChatGPT-related skills. Furthermore, based on a large-scale occupation-centered knowledge graph, we develop a semantic-information-enhanced collaborative filtering algorithm to predict future occupation-skill relations in the labor market. We find that an additional 45% of occupations will require ChatGPT-related skills in the future. In particular, industries related to technology, products, and operations are expected to have higher proficiency requirements for ChatGPT-related skills, while manufacturing, services, education, and health science related industries will have lower requirements.
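    The paper's semantic-information-enhanced collaborative filtering algorithm is not specified in the abstract. As a rough illustration of the underlying idea of predicting missing occupation-skill links, the sketch below uses plain truncated-SVD matrix completion on a made-up binary occupation-skill matrix.

```python
import numpy as np

# Hedged sketch of link prediction by collaborative filtering. This is
# plain truncated-SVD matrix completion, NOT the paper's algorithm, and
# the tiny binary matrix is fabricated (rows: occupations, cols: skills).
R = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

k = 2                                            # number of latent factors
U, s, Vt = np.linalg.svd(R, full_matrices=False)
scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # low-rank link scores

# Rank currently-absent occupation-skill pairs by predicted score.
candidates = [(i, j, scores[i, j]) for i in range(R.shape[0])
              for j in range(R.shape[1]) if R[i, j] == 0]
for i, j, v in sorted(candidates, key=lambda t: -t[2]):
    print(f"occupation {i} -> skill {j}: score {v:.2f}")
```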

    TBFormer: Two-Branch Transformer for Image Forgery Localization

    Image forgery localization aims to identify forged regions by capturing subtle traces in high-quality discriminative features. In this paper, we propose a Transformer-style network with two feature extraction branches for image forgery localization, named the Two-Branch Transformer (TBFormer). First, two feature extraction branches are carefully designed, taking advantage of discriminative stacked Transformer layers, for RGB-domain and noise-domain features. Second, an Attention-aware Hierarchical-feature Fusion Module (AHFM) is proposed to effectively fuse hierarchical features from the two domains. Although the two branches share the same architecture, their features differ significantly because they are extracted from different domains; we adopt position attention to embed them into a unified feature domain for hierarchical feature investigation. Finally, a Transformer decoder is constructed for feature reconstruction to generate the predicted mask. Extensive experiments on publicly available datasets demonstrate the effectiveness of the proposed model.
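    A hedged sketch of the two-branch idea follows: two identical Transformer encoder branches over RGB and noise inputs, an attention-based fusion step, and a per-patch mask head. This is not the authors' TBFormer or its AHFM module; depths, widths, and the noise extractor are illustrative placeholders.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Patch embedding followed by stacked Transformer encoder layers."""
    def __init__(self, in_ch=3, dim=64, depth=2, patch=8):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                  # x: (B, C, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(tokens)

class TwoBranchLocalizer(nn.Module):
    def __init__(self, dim=64, patch=8):
        super().__init__()
        self.rgb = Branch(dim=dim, patch=patch)
        self.noise = Branch(dim=dim, patch=patch)  # stand-in noise branch
        self.fuse = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, patch * patch)  # per-patch mask logits
        self.patch = patch

    def forward(self, img, noise):
        r, n = self.rgb(img), self.noise(noise)
        fused, _ = self.fuse(r, n, n)              # cross-attention fusion
        logits = self.head(fused)                  # (B, N, patch*patch)
        b, num, _ = logits.shape
        side, p = int(num ** 0.5), self.patch
        mask = logits.view(b, side, side, p, p).permute(0, 1, 3, 2, 4)
        return mask.reshape(b, side * p, side * p)  # dense mask logits

model = TwoBranchLocalizer()
img = torch.randn(2, 3, 64, 64)
noise = torch.randn(2, 3, 64, 64)   # e.g. a high-pass residual of img
print(model(img, noise).shape)      # torch.Size([2, 64, 64])
```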

    Generative Adversarial Mapping Networks

    Generative Adversarial Networks (GANs) have shown impressive performance in generating photo-realistic images. They fit generative models by minimizing a distance measure between the real image distribution and the generated data distribution. Several distance measures have been used, such as the Jensen-Shannon divergence, f-divergences, and the Wasserstein distance, and choosing an appropriate distance measure is very important for training the generative network. In this paper, we choose the maximum mean discrepancy (MMD) as the distance metric, which has several nice theoretical guarantees. In fact, the generative moment matching network (GMMN) (Li, Swersky, and Zemel 2015) is such a generative model: it contains only one generator network G, trained by directly minimizing the MMD between the real and generated distributions. However, it fails to generate meaningful samples on challenging benchmark datasets such as CIFAR-10 and LSUN. To improve on GMMN, we propose to add an extra network F, called the mapper. F maps both the real data distribution and the generated data distribution from the original data space to a feature representation space R, and it is trained to maximize the MMD between the two mapped distributions in R, while the generator G tries to minimize it. We call the new model the generative adversarial mapping network (GAMN). We demonstrate that the adversarial mapper F can help G better capture the underlying data distribution. We also show that GAMN significantly outperforms GMMN and is superior to or comparable with other state-of-the-art GAN-based methods on the MNIST, CIFAR-10, and LSUN-Bedrooms datasets.
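    The criterion at the heart of GMMN and GAMN is the maximum mean discrepancy. A minimal (biased, V-statistic) estimator of squared MMD with a Gaussian kernel is sketched below; the adversarial training loop for the mapper F and generator G is omitted, and the bandwidth choice is illustrative.

```python
import numpy as np

# Biased (V-statistic) estimator of squared MMD with a Gaussian kernel:
# the distance GMMN minimizes directly, and that GAMN's mapper F
# maximizes in feature space while the generator G minimizes it.
def gaussian_mmd2(X, Y, sigma=1.0):
    def gram(A, B):
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, size=(200, 2))
same = rng.normal(loc=0.0, size=(200, 2))  # second draw, same distribution
fake = rng.normal(loc=1.0, size=(200, 2))  # shifted distribution
print(gaussian_mmd2(real, same))           # close to 0
print(gaussian_mmd2(real, fake))           # clearly larger
```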